MRInteract

This is a 15-minute hack to build a toy (read: don't use this for science) MR image viewer using the new interact framework that landed in IPython 2.0.

MR images are simply 3- (or 4-)dimensional arrays. In practice we also deal with additional data (the affine transform) that maps voxel indices to real-world space, but I'm not worrying about that here.
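As a quick sketch of what that transform does (the identity affine below is a stand-in for illustration; real images carry their own 4x4 matrix):

```python
import numpy as np

# A NIFTI affine maps voxel indices (i, j, k) to world-space (x, y, z)
# millimetre coordinates via homogeneous coordinates. The identity
# matrix here is a placeholder; real images store their own affine.
affine = np.eye(4)

def voxel_to_world(affine, i, j, k):
    x, y, z, _ = affine @ np.array([i, j, k, 1.0])
    return float(x), float(y), float(z)

print(voxel_to_world(affine, 10, 20, 30))  # -> (10.0, 20.0, 30.0)
```

With a real image you'd grab the matrix from `nib.load(fname).affine` instead of the identity.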

Consider this a sandbox to learn & demonstrate some new data-visualization libraries that work inside notebooks.

In [1]:
# conda install ipython-notebook matplotlib
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('figure', figsize=(10, 8))
plt.gray()  # MR images have no concept of color
from IPython.html.widgets import interact  # IPython 2.0 widget API
# pip install nibabel (the standard lib for reading/writing NIFTI images)
import nibabel as nib
<matplotlib.figure.Figure at 0x10363f290>

IPython's interact

This is a toy example using IPython's interact to view one of these images. Here, interact takes a function that produces a figure, plus one or more keyword arguments. The keyword arguments are mapped to JavaScript controls presented in the notebook, and the function is bound to those controls in the kernel. As the control values change, the function is re-executed and the figure updates.

So let's define a function taking the path to an image. We'll load the data, grab shape information, and define three functions to slice the data along different dimensions. The main plot function will re-execute whenever a control value changes.

In [2]:
def viewer(fname):
    data = nib.load(fname).get_data()
    nx, ny, nz = data.shape
    
    # yuck, there's got to be a better way
    def axial(s):
        return data[:, :, s]
    def coronal(s):
        return data[:, s, :]
    def sagittal(s):
        return data[s, :, :]

    # map view names to proper slicing function
    v2f = {'axial': axial,
           'coronal': coronal,
           'sagittal': sagittal}

    def plot(slice, view):
        fig, ax = plt.subplots()
        ax.imshow(v2f[view](slice))
        ax.set_xticks([])
        ax.set_yticks([])
        plt.show()
    
    interact(plot, slice=(0, nz-1), view=('axial', 'coronal', 'sagittal'))

Now let's view a standard skull-stripped T1 (where white matter > gray matter > CSF in intensity).

In [4]:
viewer('MNI152_T1_1mm_brain.nii.gz')

(Yes, I know the coronal & sagittal views are oriented strangely, this is a toy remember :)

Obviously, IPython's interact makes the DOM form element <-> JavaScript <-> kernel interaction extremely easy. A couple of notes:

  • I'm not sure how to alter the range of slice when the view changes. The image's dimensions are not equal, so as of right now you can't see the posterior-most brain in the coronal view.
  • This could generalize easily to 4-D datasets to sweep through time (fMRI) or gradients (DTI).
  • It requires a running kernel.
  • It's slow, most likely due to round trip between notebook and kernel.
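One way around the first point (a sketch with helper names of my own invention, not tested against the notebook above): drive the slider with a 0-1 fraction and rescale it onto each view's own index range, so every slice is reachable regardless of view.

```python
import numpy as np

def make_slicers(data):
    # map view names to (slicing function, number of slices in that view)
    return {'axial':    (lambda s: data[:, :, s], data.shape[2]),
            'coronal':  (lambda s: data[:, s, :], data.shape[1]),
            'sagittal': (lambda s: data[s, :, :], data.shape[0])}

def get_slice(data, frac, view):
    f, n = make_slicers(data)[view]
    s = int(round(frac * (n - 1)))  # rescale the fraction to this view's range
    return f(s)
```

With interact, the slider would then be something like slice=(0.0, 1.0, 0.01) and the plotting function would call get_slice(data, slice, view).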

Static Rendering

Enter Jake van der Plas' ipywidgets. With StaticInteract, the figures are saved into the notebook itself, so they remain interactive even when the notebook is rendered statically. StaticInteract simply toggles DOM elements to display the correct image, which happens very quickly.

We'll do mostly the same thing as above.

In [8]:
# pip install -e 'git+git@github.com:jakevdp/ipywidgets.git@master#egg=ipywidgets'
from ipywidgets import StaticInteract, RangeWidget
mpl.rcParams['figure.max_open_warning'] = 200  # suppress warning for many figures

data = nib.load('MNI152_T1_1mm_brain.nii.gz').get_data()
nz = data.shape[2]
def axial_viewer(slice):
    fig, ax = plt.subplots()
    ax.set_title('Axial Slice {:03d}'.format(slice))
    ax.imshow(data[:, :, slice])
    ax.set_xticks([])
    ax.set_yticks([])
    return fig

This call takes about a minute to finish because it renders every view given by the slice widget up front; pushing them into the DOM doesn't take very long.

In [9]:
StaticInteract(axial_viewer, slice=RangeWidget(0, nz-1))
Out[9]:
slice:

Moving through the brain renders extremely quickly without requiring a kernel, so it should work in nbviewer.

We do have to make a space trade-off, though: this notebook currently weighs in near 2.5MB. Statically rendering all three views would roughly triple the size, and including the 4th dimension is mostly out of the question. Perhaps someone can prove me wrong?
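A back-of-envelope check on that claim (the per-image size is derived from the observed 2.5MB notebook; the 182x218x182 grid is the standard MNI152 1mm template, but treat the exact numbers as an estimate):

```python
# Rough size estimate: the ~2.5 MB notebook embeds one figure per axial
# slice, so divide to get an average per-figure size, then scale up to
# all three views (one figure per slice in each dimension).
nx, ny, nz = 182, 218, 182          # MNI152 1 mm grid (assumed)
notebook_mb = 2.5                   # observed size with axial view only
per_image_mb = notebook_mb / nz     # ~0.014 MB per embedded figure
all_views_mb = per_image_mb * (nx + ny + nz)
print(round(all_views_mb, 1))       # roughly 8 MB for all three views
```

That's about a 3.2x increase, consistent with "roughly triple"; a 4th dimension would multiply it again by the number of volumes.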

Other libs

The next project I want to check out is bokeh from the folks at Continuum.

MPLD3 is also from Jake van der Plas (he's a machine!). It combines matplotlib's plotting interface with the JavaScript visualization library D3. As far as I can tell, it doesn't provide the interaction components of the previous examples, but it does provide figure zooming and panning.

Conclusion

I'm clearly standing on the shoulders of giants here. None of this is possible without these giants:

  • The whole IPython dev team
  • Jake van der Plas
  • Matthew Brett and the other NIPy devs (for nibabel)